1,080 research outputs found

    Algorithm Portfolio for Individual-based Surrogate-Assisted Evolutionary Algorithms

    Surrogate-assisted evolutionary algorithms (SAEAs) are powerful optimisation tools for computationally expensive problems (CEPs). However, a randomly selected algorithm may fail to solve an unknown problem because of the no free lunch theorems, and re-running the algorithm or trying other algorithms to obtain a better solution consumes additional computational resources, which is especially serious for CEPs. In this paper, we consider an algorithm portfolio for SAEAs to reduce the risk of choosing an inappropriate algorithm for CEPs. We propose two portfolio frameworks for very expensive problems in which the maximal number of fitness evaluations is only 5 times the problem's dimension. One framework, named Par-IBSAEA, runs all algorithm candidates in parallel, while a more sophisticated framework, named UCB-IBSAEA, employs the Upper Confidence Bound (UCB) policy from reinforcement learning to select the most appropriate algorithm at each iteration. An effective reward definition is proposed for the UCB policy. We consider three state-of-the-art individual-based SAEAs on different problems and compare them to the portfolios built from their instances on several benchmark problems under limited computation budgets. Our experimental studies demonstrate that the proposed portfolio frameworks significantly outperform any single algorithm on the set of benchmark problems.
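    To illustrate how a UCB policy can drive algorithm selection in such a portfolio, here is a minimal, self-contained Python sketch. The function name ucb_portfolio, the exploration constant, and the improvement-based reward are illustrative assumptions, not the paper's exact reward definition or framework.

```python
# Hypothetical sketch: UCB-based selection among candidate optimisers under a
# tight evaluation budget (5 * dimension). Names and the reward are assumptions.
import math
import random


def ucb_portfolio(algorithms, objective, dim, budget_factor=5, c=2.0):
    """Pick one candidate algorithm per iteration with the UCB policy.

    algorithms: list of callables; each proposes one solution given the history.
    objective:  expensive fitness function to minimise.
    """
    budget = budget_factor * dim          # e.g. 5 * problem dimension evaluations
    counts = [0] * len(algorithms)        # times each algorithm was selected
    rewards = [0.0] * len(algorithms)     # cumulative reward per algorithm
    history = []                          # (solution, fitness) pairs seen so far
    best = float("inf")

    for t in range(budget):
        if t < len(algorithms):
            idx = t                       # play each arm once before using UCB
        else:
            idx = max(
                range(len(algorithms)),
                key=lambda i: rewards[i] / counts[i]
                + c * math.sqrt(math.log(t + 1) / counts[i]),
            )
        candidate = algorithms[idx](history, dim)
        fitness = objective(candidate)
        history.append((candidate, fitness))
        reward = 1.0 if fitness < best else 0.0   # illustrative reward: did we improve?
        best = min(best, fitness)
        counts[idx] += 1
        rewards[idx] += reward
    return best, history


# Toy usage with random-search "algorithms" standing in for SAEA instances.
algs = [lambda h, d: [random.uniform(-5, 5) for _ in range(d)] for _ in range(3)]
best, _ = ucb_portfolio(algs, lambda x: sum(v * v for v in x), dim=10)
```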

    Distributed Multi-Task Relationship Learning

    Multi-task learning aims to learn multiple tasks jointly by exploiting their relatedness to improve the generalization performance of each task. Traditionally, performing multi-task learning requires centralizing the data from all tasks on a single machine. However, in many real-world applications, the data of different tasks may be geo-distributed over different local machines. Owing to the heavy communication caused by transmitting the data, as well as data privacy and security concerns, it is impossible to send the data of different tasks to a master machine to perform multi-task learning. Therefore, in this paper, we propose a distributed multi-task learning framework that learns predictive models for each task and the relationships between tasks in an alternating fashion within the parameter-server paradigm. In our framework, we first derive a general dual form for a family of regularized multi-task relationship learning methods. We then propose a communication-efficient primal-dual distributed optimization algorithm to solve the dual problem, carefully designing the local subproblems so that the dual problem becomes decomposable. Moreover, we provide a theoretical convergence analysis for the proposed algorithm that is specific to distributed multi-task relationship learning. We conduct extensive experiments on both synthetic and real-world datasets to evaluate the proposed framework in terms of effectiveness and convergence. Comment: To appear in KDD 201
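    The distributed primal-dual solver and the parameter-server communication are beyond a short example, but the alternating structure (per-task model updates, then a task-relationship update) can be sketched in a single process. The following Python/NumPy sketch assumes a Zhang-and-Yeung-style trace-norm update for the task covariance; the function name mtrl_alternating and all hyperparameters are illustrative, not the paper's formulation.

```python
# Minimal single-process sketch of alternating multi-task relationship learning.
# Each entry of Xs/ys plays the role of one "worker" holding one task's data.
import numpy as np


def mtrl_alternating(Xs, ys, lam=0.1, lr=0.01, outer_iters=20, inner_iters=50):
    """Alternately update per-task weights W and the task covariance Omega."""
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((d, T))                  # column t = linear model of task t
    Omega = np.eye(T) / T                 # task-relationship (covariance) matrix

    for _ in range(outer_iters):
        Omega_inv = np.linalg.inv(Omega + 1e-8 * np.eye(T))
        # Step 1: each task refines its own model given the current Omega
        # (in the distributed setting this step would run on the workers).
        for _ in range(inner_iters):
            for t in range(T):
                resid = Xs[t] @ W[:, t] - ys[t]
                grad = Xs[t].T @ resid / len(ys[t]) + lam * (W @ Omega_inv[:, t])
                W[:, t] -= lr * grad
        # Step 2: closed-form relationship update (assumed MTRL-style):
        # Omega = (W^T W)^{1/2} / tr((W^T W)^{1/2}).
        M = W.T @ W
        vals, vecs = np.linalg.eigh(M)
        sqrt_M = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T
        Omega = sqrt_M / max(np.trace(sqrt_M), 1e-12)
    return W, Omega


# Toy usage: three synthetic related regression tasks.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
Xs = [rng.normal(size=(100, 5)) for _ in range(3)]
ys = [X @ (w_true + 0.1 * rng.normal(size=5)) for X in Xs]
W, Omega = mtrl_alternating(Xs, ys)
```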

    Developmental constraints, innovations and robustness

    During my PhD, I have been working on Evo-Devo patterns (especially the debate around the hourglass model) in transcriptomes, with an emphasis on adaptation. I have characterized patterns in model organisms in terms of constraints and especially in terms of positive selection. I found that the phylotypic stage (a stage in mid-embryonic development) is an evolutionary lockdown, with stronger purifying selection and less positive selection than other stages in the evolution of both protein sequences and regulatory elements. To study the adaptive evolution of gene regulation during development, I have developed a machine-learning-based in silico mutagenesis approach to detect positive selection on regulatory elements. In addition to transcriptome evolution, I have been working on the tension between precision and stochasticity of gene expression during development. More precisely, I have shown that expression noise follows an hourglass pattern, with lower noise at the phylotypic stage. This pattern can be explained by stronger histone-modification-mediated noise control at this stage. In addition, I propose that histone modifications contribute to mutational robustness in regulatory elements, and thus to conserved expression levels. These results provide insight into the role of robustness in the phenotypic and genetic patterns of evolutionary conservation in animal development.